Question Answering with Subgraph Embeddings
This paper presents a system which learns to answer questions on a broad
range of topics from a knowledge base using few hand-crafted features. Our
model learns low-dimensional embeddings of words and knowledge base
constituents; these representations are used to score natural language
questions against candidate answers. Training our system using pairs of
questions and structured representations of their answers, and pairs of
question paraphrases, yields competitive results on a recent benchmark from
the literature.
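The scoring idea described above can be sketched in a few lines: embed both the question words and the knowledge-base constituents of a candidate answer in the same low-dimensional space, and rank candidates by a dot product. This is a minimal illustration, not the paper's implementation; the vocabulary, entity names, and random embeddings are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared vocabulary of words and KB constituents.
vocab = {"who": 0, "directed": 1, "Avatar": 2, "James_Cameron": 3, "Titanic": 4}
dim = 8  # low-dimensional embedding size

# One embedding matrix covers both question words and KB entities.
E = rng.normal(scale=0.1, size=(len(vocab), dim))

def embed(tokens):
    """Bag-of-constituents representation: sum of token embeddings."""
    return E[[vocab[t] for t in tokens]].sum(axis=0)

def score(question_tokens, answer_tokens):
    """Dot-product compatibility between question and candidate answer."""
    return float(embed(question_tokens) @ embed(answer_tokens))

q = ["who", "directed", "Avatar"]
candidates = {"James_Cameron": ["James_Cameron"], "Titanic": ["Titanic"]}
best = max(candidates, key=lambda a: score(q, candidates[a]))
```

In training, the embeddings would be adjusted so that correct question-answer pairs score higher than corrupted ones; here they are random, so `best` is arbitrary.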
Memory Networks
We describe a new class of learning models called memory networks. Memory
networks reason with inference components combined with a long-term memory
component; they learn how to use these jointly. The long-term memory can be
read and written to, with the goal of using it for prediction. We investigate
these models in the context of question answering (QA) where the long-term
memory effectively acts as a (dynamic) knowledge base, and the output is a
textual response. We evaluate them on a large-scale QA task, and a smaller, but
more complex, toy task generated from a simulated world. In the latter, we show
the reasoning power of such models by chaining multiple supporting sentences to
answer questions that require understanding the intension of verbs.
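The chaining behaviour described above can be illustrated with a toy memory component: sentences are written to a long-term store, a question retrieves the best-matching supporting sentence, and a second retrieval hop conditions on the first. This is a deliberately simplified sketch using word overlap in place of learned scoring; the story and question are invented examples.

```python
# Long-term memory: a list of stored sentences, written to as a story unfolds.
memory = []

def write(sentence):
    memory.append(sentence.lower().split())

def match(query_words, exclude=None):
    """Read step: return the stored sentence with the greatest word overlap,
    optionally excluding an already-retrieved memory."""
    candidates = [m for m in memory if m is not exclude]
    return max(candidates, key=lambda m: len(set(m) & set(query_words)))

write("John picked up the milk")
write("John went to the kitchen")

question = "where is the milk".lower().split()
hop1 = match(question)                       # supporting fact: who holds the milk
hop2 = match(hop1 + question, exclude=hop1)  # chain to the location sentence
answer = hop2[-1]
```

Two hops are needed here because no single sentence links "milk" to a location; the first retrieval supplies "John", which lets the second hop find the kitchen.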
Reading Wikipedia to Answer Open-Domain Questions
This paper proposes to tackle open-domain question answering using Wikipedia
as the unique knowledge source: the answer to any factoid question is a text
span in a Wikipedia article. This task of machine reading at scale combines the
challenges of document retrieval (finding the relevant articles) with that of
machine comprehension of text (identifying the answer spans from those
articles). Our approach combines a search component based on bigram hashing and
TF-IDF matching with a multi-layer recurrent neural network model trained to
detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA
datasets indicate that (1) both modules are highly competitive with respect to
existing counterparts and (2) multitask learning using distant supervision on
their combination is an effective complete system on this challenging task.
(ACL 2017, 10 pages)
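The retrieval component described above combines bigram hashing with TF-IDF matching. A minimal sketch of that idea, using a toy two-document collection in place of Wikipedia and Python's built-in `hash` as the hashing function:

```python
import math
from collections import Counter

# Toy collection standing in for Wikipedia paragraphs.
docs = {
    "doc_paris": "paris is the capital of france",
    "doc_rome": "rome is the capital of italy",
}

def bigrams(text):
    words = text.split()
    return list(zip(words, words[1:]))

# Hash each bigram into a fixed number of buckets so the feature
# space stays bounded regardless of vocabulary size.
NBUCKETS = 2 ** 20
def features(text):
    return Counter(hash(b) % NBUCKETS for b in bigrams(text))

doc_feats = {d: features(t) for d, t in docs.items()}
ndocs = len(docs)
df = Counter()  # document frequency per hashed bigram
for f in doc_feats.values():
    df.update(f.keys())

def tfidf_score(query, doc):
    """TF-IDF match between hashed query bigrams and a document."""
    qf, dfeat = features(query), doc_feats[doc]
    return sum(qf[h] * dfeat[h] * math.log(ndocs / df[h])
               for h in qf if h in dfeat)

best = max(docs, key=lambda d: tfidf_score("capital of france", d))
```

Bigrams shared by every document (here "capital of") get zero IDF weight, so only the discriminative bigram "of france" contributes; the full system would pass the top-ranked paragraphs on to the reading-comprehension model.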
Connecting Language and Knowledge Bases with Embedding Models for Relation Extraction
This paper proposes a novel approach for relation extraction from free text
which is trained to jointly use information from the text and from existing
knowledge. Our model is based on two scoring functions that operate by learning
low-dimensional embeddings of words and of entities and relationships from a
knowledge base. We empirically show on New York Times articles aligned with
Freebase relations that our approach is able to efficiently use the extra
information provided by a large subset of Freebase data (4M entities, 23k
relationships) to improve over existing methods that rely on text features
alone.
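A KB-side scoring function of the kind described above can be sketched with a translation-style (TransE-like) score over entity and relationship embeddings; this is one common choice, shown here as an illustration rather than the paper's exact model, with hypothetical entities and random vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16

# Hypothetical entity and relationship vocabularies.
entities = {"New_York": 0, "USA": 1}
relations = {"located_in": 0}

E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def kb_score(head, relation, tail):
    """Translation-style score: closest to 0 when head + relation ≈ tail.

    Training would push scores of true Freebase triples above those of
    corrupted triples; here the embeddings are untrained."""
    h = E[entities[head]]
    r = R[relations[relation]]
    t = E[entities[tail]]
    return -float(np.linalg.norm(h + r - t))

s = kb_score("New_York", "located_in", "USA")
```

In the joint model, a second scoring function over text mentions is combined with this KB score so that textual evidence and Freebase structure reinforce each other.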
Experimental observation of a strong mean flow induced by internal gravity waves
We report the experimental observation of a robust horizontal mean flow
induced by internal gravity waves. A wave beam is forced at the lateral
boundary of a tank filled with a linearly stratified fluid initially at rest.
After a transient regime, a strong jet appears in the wave beam, with
horizontal recirculations outside the wave beam. We present a simple physical
mechanism predicting the growth rate of the mean flow and its initial spatial
structure. We find good agreement with experimental results.
- …